University of Texas at Austin

Upcoming Event: Oden Institute & Dept. of Statistics and Data Sciences

Learning Theory for the AI for Science Era

Ambuj Tewari, Professor, University of Michigan

3:30 – 5:00 PM
Tuesday, February 3, 2026

POB 6.304 and Zoom

Abstract

Many problems in AI for Science can be viewed as learning an operator, a map from one space of functions to another. Examples include learning fast surrogates for PDE solvers and other scientific simulations. These problems break existing learning theory paradigms because the outputs are infinite-dimensional and only partially observed through discretization.

In this talk, I argue that operator learning is a genuinely new learning-theoretic problem in which intuitions and techniques from finite-dimensional settings break down. I will first explain why classical multi-output learning results fail to extend to infinite-dimensional outputs. I will then discuss what governs learnability in the simplest setting of linear operators.
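
To make the linear-operator setting concrete, here is a minimal, illustrative sketch (not the speaker's method, and all problem choices are assumptions): we treat the solution operator of the discretized 1-D Poisson problem -u'' = f, u(0) = u(1) = 0, as an unknown linear map and recover it by least squares from input–output pairs of discretized functions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32                              # grid resolution (the discretization)
h = 1.0 / (n + 1)

# Discrete Laplacian on the interior grid; its inverse is the true
# solution operator f |-> u restricted to this grid.
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
G_true = np.linalg.inv(A)

# Training pairs (f_i, u_i): random forcings and their PDE solutions.
m = 200
F = rng.standard_normal((m, n))     # one discretized input f per row
U = F @ G_true.T                    # corresponding outputs u = G f

# Fit a linear operator by least squares: find G with U ~ F G^T.
G_hat = np.linalg.lstsq(F, U, rcond=None)[0].T

rel_err = np.linalg.norm(G_hat - G_true) / np.linalg.norm(G_true)
print(f"relative operator error: {rel_err:.2e}")
```

Since the data here are noiseless and observed on a fixed grid, plain least squares recovers the operator essentially exactly; the learning-theoretic questions in the talk concern what happens when outputs are infinite-dimensional and only partially observed.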

I next show that operator learning reveals new sources of error not seen in classical learning theory. These include errors arising from truncating infinite basis expansions as well as discretization errors. Finally, I demonstrate how changing the data collection protocol from passive to active can dramatically alter what is learnable. Active data collection is especially natural in scientific applications where simulations can be run for user-specified initial conditions.
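
One of the error sources mentioned above, truncating an infinite basis expansion, can be illustrated with a short sketch (the sine basis and the target function are assumptions chosen for illustration): expand a smooth function in an orthonormal sine basis and watch the L2 error shrink as more modes are retained.

```python
import numpy as np

n = 512                                  # grid resolution
x = np.linspace(0, 1, n + 2)[1:-1]       # interior grid points
h = x[1] - x[0]
f = x * (1 - x) * np.exp(x)              # a smooth function on [0, 1]

def truncation_error(k):
    """L2 error after keeping the first k sine modes of f."""
    approx = np.zeros_like(f)
    for j in range(1, k + 1):
        phi = np.sqrt(2.0) * np.sin(np.pi * j * x)   # orthonormal mode
        coeff = (f * phi).sum() * h                  # <f, phi_j> by quadrature
        approx += coeff * phi
    return np.sqrt(((f - approx) ** 2).sum() * h)

errs = [truncation_error(k) for k in (1, 2, 4, 8, 16)]
print(errs)    # error decreases as more basis functions are kept
```

In the infinite-dimensional output setting, this truncation error is unavoidable and interacts with the discretization error from observing functions only on a finite grid.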

I conclude by pointing to directions for further exciting work, including time generalization, zero-shot super-resolution, and learning protocols that expose more internal details of the scientific solver.

Biography

Ambuj Tewari is a Professor in the Department of Statistics and the Department of Electrical Engineering and Computer Science (by courtesy) at the University of Michigan, Ann Arbor. He is affiliated with the Michigan Center for Applied and Interdisciplinary Mathematics (MCAIM) and the Michigan Institute for Data & AI in Society (MIDAS).

His research interests lie in machine learning, with an emphasis on statistical learning theory, online learning, reinforcement learning and control, and optimization. He also collaborates closely with domain scientists to develop principled machine learning methods for applications in the behavioral sciences, psychiatry, and the chemical sciences, with a recent focus on AI for Science and operator learning.

His research has been recognized with multiple paper awards, including Best Paper Awards at the Conference on Learning Theory (COLT 2005, COLT 2011) and an Outstanding Paper Award at the International Conference on Algorithmic Learning Theory (ALT 2025). He received an NSF CAREER Award in 2015 and a Sloan Research Fellowship in Computer Science in 2017. He was named a Fellow of the Institute of Mathematical Statistics (IMS) in 2022 and received the Early Career Award in Statistics and Data Sciences from the International Indian Statistical Association in 2023. In 2025, he received the Individual Award for Outstanding Contributions to Undergraduate Education from the University of Michigan College of Literature, Science, and the Arts.
